Enhancing Fluorescence Lifetime Parameter Estimation Accuracy with Differential Transformer Based Deep Learning Model Incorporating Pixelwise Instrument Response Function

Erbas, Ismail, Pandey, Vikas, Nizam, Navid Ibtehaj, Yuan, Nanxue, Verma, Amit, Barosso, Margarida, Intes, Xavier

arXiv.org Artificial Intelligence

Fluorescence Lifetime Imaging (FLI) is a critical molecular imaging modality that provides unique information about the tissue microenvironment, which is invaluable for biomedical applications. FLI operates by acquiring and analyzing photon time-of-arrival histograms to extract quantitative parameters associated with temporal fluorescence decay. These histograms are influenced by the intrinsic properties of the fluorophore, instrument parameters, and time-of-flight distributions associated with pixel-wise variations in the topographic and optical characteristics of the sample. Recent advancements in Deep Learning (DL) have enabled improved fluorescence lifetime parameter estimation. However, existing models are primarily designed for planar surface samples, limiting their applicability in translational scenarios involving complex surface profiles, such as in-vivo whole-animal or image-guided surgical applications. To address this limitation, we present MFliNet (Macroscopic FLI Network), a novel DL architecture that integrates the Instrument Response Function (IRF) as an additional input alongside experimental photon time-of-arrival histograms. Leveraging the capabilities of a Differential Transformer encoder-decoder architecture, MFliNet effectively focuses on critical input features, such as variations in photon time-of-arrival distributions. We evaluate MFliNet using rigorously designed tissue-mimicking phantoms and preclinical in-vivo cancer xenograft models. Our results demonstrate the model's robustness and suitability for complex macroscopic FLI applications, offering new opportunities for advanced biomedical imaging in diverse and challenging settings.
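As a rough illustration of the forward model this abstract alludes to (not MFliNet itself), the measured photon time-of-arrival histogram at each pixel can be modeled as the convolution of that pixel's IRF with an exponential fluorescence decay; a lifetime estimator must invert this relationship. The IRF shape, bin width, and lifetime values below are all hypothetical:

```python
import numpy as np

def simulate_histogram(irf, lifetime_ns, bin_width_ns=0.05):
    """Convolve a pixel's IRF with a mono-exponential decay.

    irf         : 1-D array, the pixelwise instrument response function
    lifetime_ns : fluorescence lifetime tau (hypothetical values below)
    """
    t = np.arange(len(irf)) * bin_width_ns
    decay = np.exp(-t / lifetime_ns)
    hist = np.convolve(irf, decay)[: len(irf)]
    return hist / hist.sum()  # normalize to a probability histogram

# Hypothetical narrow Gaussian IRF peaked at time bin 20
bins = np.arange(256)
irf = np.exp(-0.5 * ((bins - 20) / 3.0) ** 2)

h_short = simulate_histogram(irf, lifetime_ns=1.0)
h_long = simulate_histogram(irf, lifetime_ns=2.0)  # slower decay, heavier tail
```

Because the IRF varies pixel to pixel in macroscopic imaging of non-planar samples, supplying it as a second input (as MFliNet does) gives the model the information needed to disentangle instrument effects from the decay itself.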


Bones and All Is Clearance-Rack Grand Guignol

Slate

I'm writing this post from the guest room in my mom's house, which is peppered with old knick-knacks of mine--to summon the spirit of my childhood room, I suppose. While flipping through my photo albums, I was tickled to find a blurry picture of the poster for Phone Booth, clearly taken by me on a disposable camera outside of a movie theater. I was probably too young to be watching a gunman thriller--thanks, Mom--but I'm pretty sure my affection for it had a lot to do with Colin Farrell, who was a relative unknown when that movie came out in 2002. To this day, I'm a bit gaga over him, though I think part of the reason my puppy love has turned into something more enduring is that, as I've gotten older and my tastes have evolved, so has the actor's persona. Not to downplay his macho heartthrob phase in the aughts--I still go catatonic whenever I think about him salsa dancing in Miami Vice, and I sense noted MV-heads Bilge and David feel the same way--but it has been a delight to see him take on increasingly stranger, more cerebral roles for directors like Yorgos Lanthimos and Sofia Coppola while also pushing himself, unafraid to get ugly and unhinged, in blockbusters like The Batman.


Assessing thermal imagery integration into object detection methods on ground-based and air-based collection platforms

Gallagher, James, Oughton, Edward

arXiv.org Artificial Intelligence

Object detection models commonly deployed on uncrewed aerial systems (UAS) focus on identifying objects in the visible spectrum using Red-Green-Blue (RGB) imagery. However, there is growing interest in fusing RGB with thermal long wave infrared (LWIR) images to increase the performance of object detection machine learning (ML) models. Currently, LWIR ML models have received less research attention, especially for both ground- and air-based platforms, leading to a lack of baseline performance metrics evaluating LWIR, RGB and LWIR-RGB fused object detection models. Therefore, this research contributes such quantitative metrics to the literature. The results found that the ground-based blended RGB-LWIR model exhibited superior performance compared to the RGB or LWIR approaches, achieving a mAP of 98.4%. Additionally, the blended RGB-LWIR model was the only object detection model to work in both day and night conditions, providing superior operational capabilities. This research additionally contributes a novel labelled training dataset of 12,600 images for RGB, LWIR, and RGB-LWIR fused imagery, collected from ground-based and air-based platforms, enabling further multispectral machine-driven object detection research.
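The abstract does not specify how the RGB and LWIR channels were blended, but a minimal sketch of one common approach — a pixelwise weighted blend of a registered RGB frame with a thermal frame before detection — might look like the following. The 0.7/0.3 weights are illustrative, not taken from the paper:

```python
import numpy as np

def blend_rgb_lwir(rgb, lwir, alpha=0.7):
    """Pixelwise weighted blend of registered RGB and LWIR frames.

    rgb   : (H, W, 3) uint8 visible-spectrum image
    lwir  : (H, W) single-channel thermal image, same resolution
    alpha : weight on the RGB channels (illustrative value)
    """
    # Broadcast the thermal channel to 3 channels, then blend in float
    lwir3 = np.repeat(lwir[..., None], 3, axis=2).astype(np.float32)
    fused = alpha * rgb.astype(np.float32) + (1 - alpha) * lwir3
    return np.clip(fused.round(), 0, 255).astype(np.uint8)

# Tiny synthetic frames: uniform bright RGB, cooler thermal response
rgb = np.full((4, 4, 3), 200, dtype=np.uint8)
lwir = np.full((4, 4), 100, dtype=np.uint8)
fused = blend_rgb_lwir(rgb, lwir)  # 0.7 * 200 + 0.3 * 100 = 170 per channel
```

In practice the two sensors must first be spatially registered, and the blended frames are then fed to a standard RGB object detector for training and inference.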


'They track every move': how US parole apps created digital prisoners

The Guardian

In 2018, William Frederick Keck III pleaded guilty in a court in Manassas, Virginia, to possession with intent to distribute cannabis. He served three months in prison, then began a three-year probation. He was required to wear a GPS ankle monitor before his trial and then to report for random drug tests after his release. Eventually, the state reduced his level of monitoring to scheduled meetings with his parole officer. Finally, after continued good behaviour, Keck's parole officer moved him to Virginia's lowest level of monitoring: an app on his smartphone.


NuCLS: A scalable crowdsourcing, deep learning approach and dataset for nucleus classification, localization and segmentation

Amgad, Mohamed, Atteya, Lamees A., Hussein, Hagar, Mohammed, Kareem Hosny, Hafiz, Ehab, Elsebaie, Maha A. T., Alhusseiny, Ahmed M., AlMoslemany, Mohamed Atef, Elmatboly, Abdelmagid M., Pappalardo, Philip A., Sakr, Rokia Adel, Mobadersany, Pooya, Rachid, Ahmad, Saad, Anas M., Alkashash, Ahmad M., Ruhban, Inas A., Alrefai, Anas, Elgazar, Nada M., Abdulkarim, Ali, Farag, Abo-Alela, Etman, Amira, Elsaeed, Ahmed G., Alagha, Yahya, Amer, Yomna A., Raslan, Ahmed M., Nadim, Menatalla K., Elsebaie, Mai A. T., Ayad, Ahmed, Hanna, Liza E., Gadallah, Ahmed, Elkady, Mohamed, Drumheller, Bradley, Jaye, David, Manthey, David, Gutman, David A., Elfandy, Habiba, Cooper, Lee A. D.

arXiv.org Artificial Intelligence

High-resolution mapping of cells and tissue structures provides a foundation for developing interpretable machine-learning models for computational pathology. Deep learning algorithms can provide accurate mappings given large numbers of labeled instances for training and validation. Generating an adequate volume of quality labels has emerged as a critical barrier in computational pathology given the time and effort required from pathologists. In this paper we describe an approach for engaging crowds of medical students and pathologists that was used to produce a dataset of over 220,000 annotations of cell nuclei in breast cancers. We show how suggested annotations generated by a weak algorithm can improve the accuracy of annotations generated by non-experts and can yield useful data for training segmentation algorithms without laborious manual tracing. We systematically examine interrater agreement and describe modifications to the MaskRCNN model to improve cell mapping. We also describe a technique we call Decision Tree Approximation of Learned Embeddings (DTALE) that leverages nucleus segmentations and morphologic features to improve the transparency of nucleus classification models. The annotation data produced in this study are freely available for algorithm development and benchmarking at: https://sites.google.com/view/nucls .
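DTALE's details are not given in the abstract, but the general idea — fitting an interpretable decision tree over morphologic features to mimic a black-box nucleus classifier's outputs — can be sketched as below. The feature names, the stand-in "teacher" rule, and the depth-1 stump fitter are all illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative morphologic features for 500 nuclei: area and circularity
area = rng.uniform(20, 200, size=500)
circularity = rng.uniform(0.2, 1.0, size=500)
X = np.column_stack([area, circularity])

# Stand-in "teacher": pretend a black-box model labels large nuclei as class 1
teacher_labels = (area > 110).astype(int)

def fit_stump(X, y):
    """Fit a depth-1 decision tree (stump) that best mimics teacher labels.

    Scans every (feature, threshold) pair and keeps the one whose rule
    'feature > threshold -> class 1' best agrees with the teacher.
    """
    best = (0, 0.0, 0.0)  # (feature index, threshold, agreement)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            acc = np.mean((X[:, j] > thr).astype(int) == y)
            if acc > best[2]:
                best = (j, thr, acc)
    return best

feature, threshold, agreement = fit_stump(X, teacher_labels)
```

A real surrogate tree would be deeper and fit with a standard library, but the payoff is the same: the resulting thresholds on human-meaningful features (area, shape) explain what the opaque classifier has learned.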


Network Medicine Framework for Identifying Drug Repurposing Opportunities for COVID-19

Gysi, Deisy Morselli, Valle, Ítalo Do, Zitnik, Marinka, Ameli, Asher, Gan, Xiao, Varol, Onur, Ghiassian, Susan Dina, Patten, JJ, Davey, Robert, Loscalzo, Joseph, Barabási, Albert-László

arXiv.org Machine Learning

The current pandemic has highlighted the need for methodologies that can quickly and reliably prioritize clinically approved compounds for their potential effectiveness against SARS-CoV-2 infections. In the past decade, network medicine has developed and validated multiple predictive algorithms for drug repurposing, exploiting the sub-cellular network-based relationship between a drug's targets and disease genes. Here, we deployed algorithms relying on artificial intelligence, network diffusion, and network proximity, tasking each of them to rank 6,340 drugs for their expected efficacy against SARS-CoV-2. To test the predictions, we used as ground truth 918 drugs that had been experimentally screened in VeroE6 cells, together with the list of drugs in clinical trials, which captures the medical community's assessment of drugs with potential COVID-19 efficacy. We find that while most algorithms offer predictive power for these ground truth data, no single method offers consistently reliable outcomes across all datasets and metrics. This prompted us to develop a multimodal approach that fuses the predictions of all algorithms, showing that a consensus among the different predictive methods consistently exceeds the performance of the best individual pipelines. We find that 76 of the 77 drugs that successfully reduced viral infection do not bind the proteins targeted by SARS-CoV-2, indicating that these drugs rely on network-based actions that cannot be identified using docking-based strategies. These advances offer a methodological pathway to identify repurposable drugs for future pathogens and neglected diseases underserved by the costs and extended timeline of de novo drug development.
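As a simple illustration of the consensus idea described here (not the paper's exact fusion rule), one common multimodal aggregation averages each drug's rank across the individual pipelines and re-ranks by the mean; the pipeline outputs below are hypothetical:

```python
def consensus_rank(rankings):
    """Fuse several ranked drug lists by average rank (lower = better).

    rankings : list of lists, each an ordering of the same drug names
               produced by one predictive pipeline
    """
    drugs = rankings[0]
    mean_rank = {
        d: sum(r.index(d) for r in rankings) / len(rankings) for d in drugs
    }
    return sorted(drugs, key=lambda d: mean_rank[d])

# Three hypothetical pipelines (e.g. AI, diffusion, proximity) ranking 4 drugs
p1 = ["A", "B", "C", "D"]
p2 = ["B", "A", "D", "C"]
p3 = ["A", "C", "B", "D"]

fused = consensus_rank([p1, p2, p3])
```

The appeal of such a consensus is robustness: a drug ranked highly by every method rises to the top even when no single pipeline is reliable across all datasets.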


Using Machine Learning In Fabs

#artificialintelligence

Amid the shift towards more complex chips at advanced nodes, many chipmakers are exploring or turning to advanced forms of machine learning to help solve some big challenges in IC production. A subset of artificial intelligence (AI), machine learning uses advanced algorithms to recognize patterns in data as well as to learn and make predictions about the information. In the fab, machine learning promises to provide faster and more accurate results in select areas, such as finding and classifying defects in chips. Machine learning also is used in other process steps, but there are still some challenges to deploying it. The technique has been used in computing and other fields for decades, and it first appeared in semiconductor production in the 1990s, when some saw it as a way to help automate the steps for some manually driven fab equipment. Over time, machine learning has made staggering progress in computing and elsewhere.


Boeing's Autonomous Taxi Takes Flight

WSJ.com: WSJD - Technology

Recent flight stoppages involving remote-controlled drones have highlighted the potential for self-flying vehicles to interfere with existing commercial aviation. The autonomous airborne vehicles under development generally take off and land like a small helicopter that could carry a handful of passengers. They would shuttle between predetermined sites, such as building rooftops. Boeing said its electric-powered concept demonstrator, designed to have a range of 50 miles, flew for the first time on Tuesday in Manassas, Va. The 30-foot-long, 28-foot-wide aircraft took off, hovered and landed, the company said.


Spectrum internet down again after series of outages linked to California wildfires

The Independent - Tech

Spectrum's internet has stopped working again, after a series of outages blamed on the ongoing California wildfires. The company has suffered repeated outages in the wake of the deadly fires, which it said had caused problems across Southern California. The latest problems have come from across the country: users from as far afield as Kentucky and Ohio have reported issues to Spectrum's Twitter account. As such, it isn't clear if the latest outages are related only to the wildfires.


OnePlus 6T review: A deeply appealing device from this plucky Chinese smartphone maker

The Independent - Tech

I've been using the OnePlus 6T for almost a month now, and I really like it. It's very much a continuation of the styling OnePlus used on the OnePlus 6 earlier in the year. That was the first to break with the metal-back styling of phones since the OnePlus 3 and opt for a glass enclosure front and rear. This looks as smart and stylish as the last model – which is very stylish, by the way – and although it's a little thicker from front to back (8.2mm instead of the OnePlus 6's 7.8mm), it feels very comfortable in the hand, not least because it's narrower than the last model. Still, it squeezes in a bigger 6.4-inch display because the screen proportions are different: This is a 19.5:9 ratio screen.